Practical 1

Aim of this practical:

In this first practical we are going to look at some simple models:

  1. A Gaussian linear model with simulated data
  2. A linear mixed model
  3. A generalised linear model (GLM)

For each model we are going to learn how to define its components, fit it with inlabru, and generate predictions.

Linear Model

In this practical we will:

  • Simulate Gaussian data
  • Learn how to fit a linear model with inlabru
  • Generate predictions from the model

Start by loading useful libraries:

library(dplyr)
library(INLA)
library(ggplot2)
library(patchwork)
library(inlabru)     
# load some libraries to generate nice plots
library(scico)

As our first example we consider a simple linear regression model with Gaussian observations \[ y_i\sim\mathcal{N}(\mu_i, \sigma^2), \qquad i = 1,\dots,N \]

where \(\sigma^2\) is the observation error variance, and the mean parameter \(\mu_i\) is linked to the linear predictor \(\eta_i\) through the identity link: \[ \eta_i = \mu_i = \beta_0 + \beta_1 x_i \] where \(x_i\) is a covariate and \(\beta_0, \beta_1\) are parameters to be estimated. We assign \(\beta_0\) and \(\beta_1\) vague Gaussian priors.

To finalize the Bayesian model we assign a \(\text{Gamma}(a,b)\) prior to the precision parameter \(\tau = 1/\sigma^2\) and two independent Gaussian priors with mean \(0\) and precision \(\tau_{\beta}\) to the regression parameters \(\beta_0\) and \(\beta_1\) (we will use the default prior settings in INLA for now).

Tip Question

What is the dimension of the hyperparameter vector and latent Gaussian field?

The hyperparameter vector has dimension 1, \(\pmb{\theta} = (\tau)\), while the latent Gaussian field \(\pmb{u} = (\beta_0, \beta_1)\) has dimension 2, mean \(0\), and sparse precision matrix:

\[ \pmb{Q} = \begin{bmatrix} \tau_{\beta_0} & 0\\ 0 & \tau_{\beta_1} \end{bmatrix} \] Note that, since \(\beta_0\) and \(\beta_1\) are fixed effects, the precision parameters \(\tau_{\beta_0}\) and \(\tau_{\beta_1}\) are fixed.

Note

We can write the linear predictor vector \(\pmb{\eta} = (\eta_1,\dots,\eta_N)\) as

\[ \pmb{\eta} = \pmb{A}\pmb{u} = \pmb{A}_1\pmb{u}_1 + \pmb{A}_2\pmb{u}_2 = \begin{bmatrix} 1 \\ 1\\ \vdots\\ 1 \end{bmatrix} \beta_0 + \begin{bmatrix} x_1 \\ x_2\\ \vdots\\ x_N \end{bmatrix} \beta_1 \]

Our linear predictor then consists of two components: an intercept and a slope.
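
To make the Note above concrete, here is a minimal R sketch (with a small hypothetical covariate vector x) of the predictor built as \(\pmb{A}_1\pmb{u}_1 + \pmb{A}_2\pmb{u}_2\):

# Minimal sketch: building the predictor from the two design columns
x = c(0.2, 0.5, 0.9)              # hypothetical covariate values
A1 = matrix(1, nrow = length(x))  # column of ones, maps beta_0 to each eta_i
A2 = matrix(x, ncol = 1)          # covariate column, maps beta_1 to each eta_i
eta = A1 %*% 2 + A2 %*% 0.5       # equals beta_0 + beta_1 * x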

Simulate example data

First, we simulate data from the model

\[ y_i\sim\mathcal{N}(\eta_i,0.1^2), \ i = 1,\dots,100 \]

with

\[ \eta_i = \beta_0 + \beta_1 x_i \]

where \(\beta_0 = 2\), \(\beta_1 = 0.5\) and the values of the covariate \(x\) are generated from a standard Gaussian distribution (see the code below). The simulated response and covariate data are then saved in a data.frame object.

Simulate Data from a LM
beta = c(2, 0.5)   # true intercept and slope
sd_error = 0.1     # observation error standard deviation

n = 100
x = rnorm(n)       # covariate values
y = beta[1] + beta[2] * x + rnorm(n, sd = sd_error)

df = data.frame(y = y, x = x)
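
As a quick sanity check (not needed for the Bayesian analysis), the ordinary least squares estimates for these data should land close to the true values \(\beta = (2, 0.5)\):

# OLS sanity check: estimates should be close to c(2, 0.5)
summary(lm(y ~ x, data = df))$coefficients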

Fitting a linear regression model with inlabru


Defining model components

The model has two parameters to be estimated, \(\beta_0\) and \(\beta_1\). We need to define the two corresponding model components.

Warning Task

Define an object called cmp that includes (i) an intercept beta_0 and (ii) a linear effect beta_1 for the covariate x.

The cmp object is used here to define the model components. We can give them any useful names we like; in this case, beta_0 and beta_1. You can remove the automatic intercept construction by adding -1 to the components formula.

Code
cmp =  ~ -1 + beta_0(1) + beta_1(x, model = "linear")
Note

Note that we have excluded the default intercept term from the model by typing -1 in the model components. However, inlabru has an automatic intercept that can be requested by typing Intercept(), one of inlabru's special names, which defines a global intercept, e.g.

cmp =  ~  Intercept(1) + beta_1(x, model = "linear")

Observation model construction

The next step is to construct the observation model by defining the model likelihood. The most important inputs here are the formula, the family and the data.

Warning Task

Define a linear predictor eta using the component labels you defined in the previous task.

The eta object defines how the components should be combined in order to define the model predictor.

Code
eta = y ~ beta_0 + beta_1

The likelihood for the observational model is defined using the bru_obs() function.

Warning Task

Define the observational model likelihood in an object called lik using the bru_obs() function.

The bru_obs() function expects three arguments:

  • The linear predictor eta we defined in the previous task
  • The data likelihood (this can be specified by setting family = "gaussian")
  • The data set df
Code
lik = bru_obs(formula = eta,
              family = "gaussian",
              data = df)
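
We rely on the default priors here, but note that bru_obs() also forwards INLA's control.family argument. As a sketch, this is how you could write the precision prior out explicitly (the values shown are, to the best of our recollection, INLA's default loggamma(1, 5e-05) prior on the log precision):

# Sketch: same likelihood, with the default precision prior made explicit
lik_explicit = bru_obs(formula = eta,
                       family = "gaussian",
                       data = df,
                       control.family = list(
                         hyper = list(prec = list(prior = "loggamma",
                                                  param = c(1, 5e-05)))))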

Fit the model

We fit the model using the bru() function, which takes as input the components and the observation model:

fit.lm = bru(cmp, lik)

Extract results

The summary() function gives access to basic information about the model fit and parameter estimates:

summary(fit.lm)
## inlabru version: 2.13.0.9011 
## INLA version: 25.09.19 
## Components: 
## Latent components:
## beta_0: main = linear(1)
## beta_1: main = linear(x)
## Observation models: 
##   Family: 'gaussian'
##     Tag: <No tag>
##     Data class: 'data.frame'
##     Response class: 'numeric'
##     Predictor: y ~ beta_0 + beta_1
##     Additive/Linear: TRUE/TRUE
##     Used components: effects[beta_0, beta_1], latent[] 
## Time used:
##     Pre = 1.06, Running = 0.286, Post = 0.0185, Total = 1.36 
## Fixed effects:
##         mean   sd 0.025quant 0.5quant 0.975quant  mode kld
## beta_0 2.006 0.01      1.987    2.006      2.026 2.006   0
## beta_1 0.491 0.01      0.471    0.491      0.511 0.491   0
## 
## Model hyperparameters:
##                                           mean    sd 0.025quant 0.5quant
## Precision for the Gaussian observations 102.55 14.50      76.14   101.87
##                                         0.975quant   mode
## Precision for the Gaussian observations     132.89 100.50
## 
## Marginal log-Likelihood:  67.19 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')

We can see that the intercept, the slope, and the error precision are all estimated close to their true values.
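
The same numbers can be extracted programmatically from the fitted object, which inherits from INLA's inla class:

# Posterior summaries as data frames
fit.lm$summary.fixed      # beta_0 and beta_1
fit.lm$summary.hyperpar   # precision of the Gaussian observations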

Generate model predictions


Now we can take the fitted bru object and use the predict() function to produce predictions of \(\mu\), given either a new set of values for the model covariates or the original values used for the model fit:

new_data = data.frame(x = c(df$x, runif(10)),
                      y = c(df$y, rep(NA,10)))
pred = predict(fit.lm, new_data, ~ beta_0 + beta_1,
               n.samples = 1000)

The predict() function generates samples from the fitted model. In this case we set the number of samples to 1000.
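
If you need the raw posterior samples rather than their summaries, inlabru provides the generate() function, which takes the same arguments:

# Draw 5 posterior samples of the predictor at the new covariate values
samps = generate(fit.lm, new_data, ~ beta_0 + beta_1, n.samples = 5)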

Data and 95% credible intervals
Code
pred %>% ggplot() +
  geom_point(aes(x, y), alpha = 0.3) +
  geom_line(aes(x, mean)) +
  geom_line(aes(x, q0.025), linetype = "dashed") +
  geom_line(aes(x, q0.975), linetype = "dashed") +
  xlab("Covariate") + ylab("Observations")
Warning Task

Generate predictions for a new observation with \(x_0 = 0.45\)

You can create a new data frame containing the new observation \(x_0\) and then use the predict function.

Code
new_data = data.frame(x = 0.45)
pred = predict(fit.lm, new_data, ~ beta_0 + beta_1,
               n.samples = 1000)

Linear Mixed Model

In this practical we will:

  • Understand the basic structure of a linear mixed model (LMM)
  • Simulate data from a LMM
  • Learn how to fit a LMM with inlabru and predict from the model.

Consider the simple linear regression model from before, with the addition that the data come in groups. Suppose that we want to include a random effect for each group \(j\) (equivalent to adding a group-level random intercept). The model is then: \[ y_{ij} = \beta_0 + \beta_1 x_i + u_j + \epsilon_{ij} ~~~ \text{for}~i = 1,\ldots,N~ \text{and}~ j = 1,\ldots,m. \]

Here the random group effect is given by \(u_j \sim \mathcal{N}(0, \tau^{-1}_u)\), where \(\tau_u = 1/\sigma^2_u\) describes the variability between groups (i.e., how much the group means differ from the overall mean). Then \(\epsilon_{ij} \sim \mathcal{N}(0, \tau^{-1}_\epsilon)\) denotes the residuals of the model, and \(\tau_\epsilon = 1/\sigma^2_\epsilon\) captures how much individual observations deviate from their group mean (i.e., the variability within groups).

The model design matrix for the random effect has one row for each observation (this is equivalent to a random intercept model). The row of the design matrix associated with the \(ij\)-th observation consists of zeros except for the element associated with \(u_j\), which has a one.

\[ \pmb{\eta} = \pmb{A}\pmb{u} = \pmb{A}_1\pmb{u}_1 + \pmb{A}_2\pmb{u}_2 + \pmb{A}_3\pmb{u}_3 \]
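
As a small sketch (with a hypothetical group index), this is what the random-effect design matrix looks like in R:

# Indicator design matrix for a group random intercept
j = rep(1:3, each = 2)              # group labels for 6 observations
Z = model.matrix(~ factor(j) - 1)   # one 0/1 column per group
Z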

Note Supplementary material: LMM as an LGM

In matrix form, the linear mixed model for the j-th group can be written as:

\[ \overbrace{\mathbf{y}_j}^{n_j \times 1} = \overbrace{X_j}^{n_j \times 2} \underbrace{\boldsymbol{\beta}}_{2\times 1} + \overbrace{Z_j}^{n_j \times 1} \underbrace{u_j}_{1\times1} + \overbrace{\boldsymbol{\epsilon}_j}^{n_j \times 1}, \]

where \(n_j\) is the number of observations in group \(j\).

In a latent Gaussian model (LGM) formulation the mixed model predictor for the i-th observation can be written as :

\[ \eta_i = \beta_0 + \beta_1 x_i + \sum_{k=1}^{K} f_k(u_j) \]

where \(f_k(u_j) = u_j\) since there’s only one random effect per group (i.e., a random intercept for group \(j\)). The fixed effects \((\beta_0,\beta_1)\) are assigned Gaussian priors (e.g., \(\beta \sim \mathcal{N}(0,\tau_\beta^{-1})\)). The random effects \(\mathbf{u} = (u_1,\ldots,u_m)^T\) follow a Gaussian density \(\mathcal{N}(0,\mathbf{Q}_u^{-1})\) where \(\mathbf{Q}_u = \tau_u\mathbf{I}_m\) is the precision matrix for the random intercepts. Then, the components for the LGM are the following:

  • Latent field given by

    \[ \begin{bmatrix} \beta \\\mathbf{u} \end{bmatrix} \sim \mathcal{N}\left(\mathbf{0},\begin{bmatrix}\tau_\beta^{-1}\mathbf{I}_2&\mathbf{0}\\\mathbf{0} &\tau_u^{-1}\mathbf{I}_m\end{bmatrix}\right) \]

  • Likelihood:

    \[ y_i \sim \mathcal{N}(\eta_i,\tau_{\epsilon}^{-1}) \]

  • Hyperparameters:

    • \(\tau_u\sim\mathrm{Gamma}(a,b)\)
    • \(\tau_\epsilon \sim \mathrm{Gamma}(c,d)\)

Simulate example data

set.seed(12)
beta = c(1.5, 1)
sd_error = 1    # within-group (residual) standard deviation
tau_group = 1   # precision of the group random effects

n = 100
n.groups = 5
x = rnorm(n)
u = rnorm(n.groups, sd = tau_group^(-1/2))  # group random intercepts
y = beta[1] + beta[2] * x + rnorm(n, sd = sd_error) +
  rep(u, each = n / n.groups)

df = data.frame(y = y, x = x, j = rep(1:n.groups, each = n / n.groups))

Note that inlabru expects an integer indexing variable to label the groups.

Code
ggplot(df) +
  geom_point(aes(x = x, colour = factor(j), y = y)) +
  theme_classic() +
  scale_colour_discrete("Group")

Data for the linear mixed model example with 5 groups

Fitting a LMM in inlabru


Defining model components and observational model

In order to specify this model we must tell inlabru which variable indexes the groups by passing it as the input to the random effect component. The model = "iid" argument tells INLA that the group effects are independent of one another.

# Define model components
cmp =  ~ -1 + beta_0(1) + beta_1(x, model = "linear") +
  u(j, model = "iid")

The group variable is indexed by the column j in the dataset. We have chosen to name this component u() to connect with the mathematical notation used above.

# Construct likelihood; the formula y ~ . sums all the defined components
lik = bru_obs(formula = y ~ .,
              family = "gaussian",
              data = df)

Fitting the model

The model can be fitted exactly as in the previous examples by using the bru function with the components and likelihood objects.

fit = bru(cmp, lik)
summary(fit)
## inlabru version: 2.13.0.9011 
## INLA version: 25.09.19 
## Components: 
## Latent components:
## beta_0: main = linear(1)
## beta_1: main = linear(x)
## u: main = iid(j)
## Observation models: 
##   Family: 'gaussian'
##     Tag: <No tag>
##     Data class: 'data.frame'
##     Response class: 'numeric'
##     Predictor: y ~ .
##     Additive/Linear: TRUE/TRUE
##     Used components: effects[beta_0, beta_1, u], latent[] 
## Time used:
##     Pre = 0.932, Running = 0.193, Post = 0.0344, Total = 1.16 
## Fixed effects:
##         mean    sd 0.025quant 0.5quant 0.975quant  mode kld
## beta_0 2.108 0.438      1.229    2.108      2.986 2.108   0
## beta_1 1.172 0.120      0.936    1.172      1.407 1.172   0
## 
## Random effects:
##   Name     Model
##     u IID model
## 
## Model hyperparameters:
##                                          mean    sd 0.025quant 0.5quant
## Precision for the Gaussian observations 0.995 0.144      0.738    0.986
## Precision for u                         1.613 1.060      0.369    1.356
##                                         0.975quant  mode
## Precision for the Gaussian observations       1.30 0.971
## Precision for u                               4.35 0.918
## 
## Marginal log-Likelihood:  -179.93 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')

Model predictions

To compute model predictions we can create a data.frame containing, for each group, a range of covariate values at which we want the response to be predicted. Then we simply call the predict() function while specifying the model components.

LMM fitted values
# New data
xpred = seq(range(x)[1], range(x)[2], length.out = 100)
j = 1:n.groups
pred_data = expand.grid(x = xpred, j = j)
pred = predict(fit, pred_data, formula = ~ beta_0 + beta_1 + u) 


pred %>%
  ggplot(aes(x = x, y = mean, color = factor(j))) +
  geom_line() +
  geom_ribbon(aes(x, ymin = q0.025, ymax = q0.975, fill = factor(j)), alpha = 0.5) +
  geom_point(data = df, aes(x = x, y = y, colour = factor(j))) +
  facet_wrap(~j)

Tip Question

Suppose that we are also interested in including random slopes in our model. Assuming intercepts and slopes are independent, can you write down the linear predictor and the components of this model as an LGM?

In general, the mixed model predictor can be decomposed as:

\[ \pmb{\eta} = X\beta + Z\mathbf{u} \]

where \(X\) is an \(n \times p\) design matrix and \(\beta\) the corresponding \(p\)-dimensional vector of fixed effects. Then \(Z\) is an \(n\times qJ\) design matrix for the \(q\) random effects across the \(J\) groups, and \(\mathbf{u}\) is the corresponding \(qJ \times 1\) vector of random effects. In a latent Gaussian model (LGM) formulation this can be written as:

\[ \eta_i = \beta_0 + \sum_{j=1}^{p}\beta_j x_{ij} + \sum_{k=1}^{K} f_k(u_{ij}) \]

  • The linear predictor is given by

    \[ \eta_i = \beta_0 + \beta_1x_i + u_{0j} + u_{1j}x_i \]

  • Latent field defined by:

    • \(\beta \sim \mathcal{N}(0,\tau_\beta^{-1})\)

    • \(\mathbf{u}_j = \begin{bmatrix}u_{0j} \\ u_{1j}\end{bmatrix}, \mathbf{u}_j \sim \mathcal{N}(\mathbf{0},\mathbf{Q}_u^{-1})\) where the precision matrix is a block-diagonal matrix with entries \(\mathbf{Q}_u= \begin{bmatrix}\tau_{u_0} & {0} \\{0} & \tau_{u_1}\end{bmatrix}\)

  • The hyperparameters are then:

    • \(\tau_{u_0}\), \(\tau_{u_1}\) and \(\tau_\epsilon\)

To fit this model in inlabru we can simply modify the model components as follows:

cmp =  ~ -1 + beta_0(1) + beta_1(x, model = "linear") +
  u0(j, model = "iid") + u1(j, weights = x, model = "iid")
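
The extended model can then be fitted exactly as before (a sketch, reusing the simulated df; since the data were simulated without random slopes, the estimated precision for u1 should come out large):

lik2 = bru_obs(formula = y ~ ., family = "gaussian", data = df)
fit_slopes = bru(cmp, lik2)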

Generalized Linear Model

In this practical we will:

  • Simulate non-Gaussian data
  • Learn how to fit a generalised linear model with inlabru
  • Generate predictions from the model

A generalised linear model allows for the data likelihood to be non-Gaussian. In this example we have a discrete response variable which we model using a Poisson distribution. Thus, we assume that our data \[ y_i \sim \text{Poisson}(\lambda_i) \] with rate parameter \(\lambda_i\) which, using a log link, has associated predictor \[ \eta_i = \log \lambda_i = \beta_0 + \beta_1 x_i \] with parameters \(\beta_0\) and \(\beta_1\), and covariate \(x\).

Simulate example data

This code generates 100 samples of covariate x and data y.

set.seed(123)
n = 100
beta = c(1,1)
x = rnorm(n)
lambda = exp(beta[1] + beta[2] * x)
y = rpois(n, lambda  = lambda)
df = data.frame(y = y, x = x)  

Fitting a GLM in inlabru


Define model components

The predictor here contains only two components (an intercept and a slope).

Warning Task

Define an object called cmp that includes (i) an intercept beta_0 and (ii) a linear effect beta_1 for the covariate x.

Code
cmp =  ~ -1 + beta_0(1) + beta_1(x, model = "linear")

Define linear predictor

Warning Task

Define a linear predictor eta using the component labels you defined in the previous task.

Code
eta = y ~ beta_0 + beta_1

Build observational model

When building the observation model likelihood we must now specify the Poisson likelihood using the family argument (the default link function for this family is the \(\log\) link).

lik = bru_obs(formula = eta,
              family = "poisson",
              data = df)

Fit the model

Once the likelihood object is constructed, fitting the model follows exactly the same process: we specify the model components and the observational model, and pass them to the bru() function:

fit_glm = bru(cmp, lik)

And model summaries can be viewed using

summary(fit_glm)
## inlabru version: 2.13.0.9011 
## INLA version: 25.09.19 
## Components: 
## Latent components:
## beta_0: main = linear(1)
## beta_1: main = linear(x)
## Observation models: 
##   Family: 'poisson'
##     Tag: <No tag>
##     Data class: 'data.frame'
##     Response class: 'integer'
##     Predictor: y ~ beta_0 + beta_1
##     Additive/Linear: TRUE/TRUE
##     Used components: effects[beta_0, beta_1], latent[] 
## Time used:
##     Pre = 0.847, Running = 0.194, Post = 0.00638, Total = 1.05 
## Fixed effects:
##         mean    sd 0.025quant 0.5quant 0.975quant  mode kld
## beta_0 0.915 0.071      0.775    0.915      1.054 0.915   0
## beta_1 1.048 0.056      0.938    1.048      1.157 1.048   0
## 
## Marginal log-Likelihood:  -204.02 
##  is computed 
## Posterior summaries for the linear predictor and the fitted values are computed
## (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')

Generate model predictions


To generate new predictions we must provide a data frame that contains the covariate values for \(x\) at which we want to predict.

This code block generates predictions for the data we used to fit the model (contained in df$x) as well as for 10 new covariate values sampled from a uniform distribution (runif(10)).

# Define new data; set the response to NA for the new prediction points

new_data = data.frame(x = c(df$x, runif(10)),
                      y = c(df$y, rep(NA,10)))

# Define predictor formula
pred_fml <- ~ exp(beta_0 + beta_1)

# Generate predictions
pred_glm <- predict(fit_glm, new_data, pred_fml)

Since we used a log link (the default for family = "poisson"), we want to predict the exponential of the linear predictor. We specify this as a general R expression using the formula syntax.
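
The predict formula can in fact evaluate to a named data.frame, so (as a sketch) summaries on both the linear and the response scale can be obtained in a single call:

# Sketch: predictions on both the linear and the response scale
pred_both = predict(fit_glm, new_data,
                    ~ data.frame(eta = beta_0 + beta_1,
                                 lambda = exp(beta_0 + beta_1)))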

Note

Note that the predict function will use the component names (i.e. the "labels") that were chosen when defining the model.

Since the component definition is looking for a covariate named \(x\), all we need to provide is a data frame that contains one, and the software does the rest.

Data and 95% credible intervals
pred_glm %>% ggplot() +
  geom_point(aes(x, y), alpha = 0.3) +
  geom_line(aes(x, mean)) +
  geom_ribbon(aes(x = x, ymax = q0.975, ymin = q0.025), fill = "tomato", alpha = 0.3) +
  xlab("Covariate") + ylab("Observations (counts)")
Warning Task

Suppose a binary response such that

\[ \begin{aligned} y_i &\sim \mathrm{Bernoulli}(\psi_i)\\ \eta_i &= \mathrm{logit}(\psi_i) = \alpha_0 + \alpha_1 w_i \end{aligned} \] Use inlabru to fit the logistic regression above to the following simulated data. Then plot the predictions for the data used to fit the model along with 10 new covariate values.

set.seed(123)
n = 100
alpha = c(0.5,1.5)
w = rnorm(n)
psi = plogis(alpha[1] + alpha[2] * w)
y = rbinom(n = n, size = 1, prob =  psi) # set size = 1 to draw binary observations
df_logis = data.frame(y = y, w = w)  

Here we use the logit link function \(\mathrm{logit}(x) = \log\left(\frac{x}{1-x}\right)\) to link the linear predictor to the probabilities \(\psi\). In R, qlogis() computes the logit and plogis() computes its inverse (the logistic function).

You can set family = "binomial" for binary responses and use the plogis() function to compute the predicted values.
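
A quick check of the two functions in R:

qlogis(0.7)           # logit: log(0.7 / 0.3)
plogis(qlogis(0.7))   # inverse logit recovers 0.7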

Note

The Bernoulli distribution is equivalent to a \(\mathrm{Binomial}(1, \psi)\) pmf. If you have proportional data (e.g. no. successes/no. trials) you can specify the number of events as your response and then the number of trials via the Ntrials = n argument of the bru_obs function (where n is the known vector of trials in your data set).
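
As a sketch (with a hypothetical data frame df_prop containing columns succ and ntrials), a binomial observation model for such proportion data could be constructed as:

# Hypothetical proportion data: succ successes out of ntrials trials
df_prop = data.frame(succ = c(3, 7, 5), ntrials = c(10, 10, 10))
lik_prop = bru_obs(formula = succ ~ .,
                   family = "binomial",
                   data = df_prop,
                   Ntrials = df_prop$ntrials)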

Code
# Model components
cmp_logis =  ~ -1 + alpha_0(1) + alpha_1(w, model = "linear")
# Model likelihood
lik_logis = bru_obs(formula = y ~ .,
                    family = "binomial",
                    data = df_logis)
# fit the model
fit_logis <- bru(cmp_logis,lik_logis)

# Define data for prediction
new_data = data.frame(w = c(df_logis$w, runif(10)),
                      y = c(df_logis$y, rep(NA,10)))
# Define predictor formula
pred_fml <- ~ plogis(alpha_0 + alpha_1)

# Generate predictions
pred_logis <- predict(fit_logis, new_data, pred_fml)

# Plot predictions
pred_logis %>% ggplot() +
  geom_point(aes(w, y), alpha = 0.3) +
  geom_line(aes(w, mean)) +
  geom_ribbon(aes(x = w, ymax = q0.975, ymin = q0.025), fill = "tomato", alpha = 0.3) +
  xlab("Covariate") + ylab("Observations")